Articles by access type:
  Paid full text: 96152
  Free: 11640
  Free (domestic): 7691
Articles by subject:
  Electrical engineering: 6222
  Technical theory: 9
  General: 9417
  Chemical industry: 5246
  Metalworking: 3197
  Machinery and instruments: 6577
  Building science: 2939
  Mining engineering: 2844
  Energy and power: 1162
  Light industry: 6027
  Water conservancy engineering: 1601
  Petroleum and natural gas: 3696
  Weapons industry: 1540
  Radio and electronics: 15317
  General industrial technology: 6078
  Metallurgy: 2885
  Atomic energy technology: 1004
  Automation technology: 39722
Articles by year:
  2024: 214
  2023: 1346
  2022: 2467
  2021: 3233
  2020: 3276
  2019: 2560
  2018: 2327
  2017: 2974
  2016: 3333
  2015: 3920
  2014: 6027
  2013: 5668
  2012: 6863
  2011: 7567
  2010: 5699
  2009: 5735
  2008: 6222
  2007: 7205
  2006: 6246
  2005: 5562
  2004: 4728
  2003: 4133
  2002: 3262
  2001: 2538
  2000: 2205
  1999: 1630
  1998: 1356
  1997: 1161
  1996: 999
  1995: 873
  1994: 748
  1993: 599
  1992: 452
  1991: 390
  1990: 340
  1989: 291
  1988: 207
  1987: 127
  1986: 114
  1985: 155
  1984: 116
  1983: 139
  1982: 110
  1981: 73
  1980: 50
  1979: 58
  1978: 29
  1977: 40
  1976: 23
  1974: 12
Sort order: 10000 query results in total; search time 153 ms.
41.
With the development of marine resource exploration and marine pollutant monitoring, the monitoring and collection of hydrological data has become an important research direction, and underwater wireless sensor networks play a pivotal role in hydrological data collection. This paper studies the data-collection problem for sensor nodes in a two-dimensional underwater wireless sensor monitoring network model. The design method first applies self-organizing mapping (SOM) to optimize the tour over the sensor nodes, then combines the optimized path geometry with the K-means algorithm to find aggregation points inside the path; from the aggregation points and the sensor nodes, data-collection points within each sensor's communication radius are obtained, and finally SOM is applied again to obtain the optimal path for an autonomous underwater vehicle (AUV) to visit all data-collection points. Experiments show that, with 52 sensor nodes deployed in a 1 200 m × 1 750 m underwater area, visiting the data-collection points in the same order as the sensor-node tour shortens the path by 6.7%, and re-planning the tour over the data-collection points with SOM improves on the best sensor-node tour by 12.2%. The results are roughly the same when the number of sensor nodes is increased, so the method can improve the efficiency of AUV data collection.
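As a rough illustration of the pipeline described in this abstract (a sketch under stated assumptions, not the authors' implementation), the snippet below uses a small elastic-ring SOM as the tour heuristic, K-means over points sampled along the optimized tour to obtain aggregation points, and a projection onto each sensor's communication radius to obtain data-collection points; the deployment size and node count follow the abstract, while the communication radius and all algorithm parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def som_tour(points, n_epochs=3000, lr=0.8, seed=0):
    """Elastic-ring SOM heuristic for a closed visiting order over `points` (N x 2)."""
    rng = np.random.default_rng(seed)
    n = 4 * len(points)                                   # ring neurons, oversampled
    neurons = points[rng.integers(len(points), size=n)] + rng.normal(0, 1.0, (n, 2))
    radius = n / 4
    for _ in range(n_epochs):
        p = points[rng.integers(len(points))]
        win = int(np.argmin(np.linalg.norm(neurons - p, axis=1)))
        d = np.abs(np.arange(n) - win)
        d = np.minimum(d, n - d)                          # distance along the ring
        h = np.exp(-d ** 2 / (2 * max(radius, 1.0) ** 2))
        neurons += lr * h[:, None] * (p - neurons)
        lr *= 0.9995
        radius *= 0.9995
    winners = [int(np.argmin(np.linalg.norm(neurons - p, axis=1))) for p in points]
    return np.argsort(winners)                            # visiting order of the points

def data_collection_points(sensors, order, comm_radius, n_agg=10, per_edge=20):
    """Aggregation points from K-means over the tour polyline, then one
    collection point per sensor, kept inside its communication radius."""
    tour = sensors[order]
    edges = zip(tour, np.roll(tour, -1, axis=0))
    samples = np.vstack([a + np.linspace(0, 1, per_edge, endpoint=False)[:, None] * (b - a)
                         for a, b in edges])
    agg = KMeans(n_clusters=n_agg, n_init=10, random_state=0).fit(samples).cluster_centers_
    pts = []
    for s in sensors:
        a = agg[np.argmin(np.linalg.norm(agg - s, axis=1))]   # nearest aggregation point
        v, dist = a - s, np.linalg.norm(a - s)
        pts.append(a if dist <= comm_radius else s + comm_radius * v / dist)
    return np.asarray(pts)

# Illustrative scenario following the abstract: 52 nodes in a 1 200 m x 1 750 m area.
rng = np.random.default_rng(1)
sensors = rng.uniform([0, 0], [1200, 1750], size=(52, 2))
sensor_order = som_tour(sensors)
collection = data_collection_points(sensors, sensor_order, comm_radius=150.0)
auv_order = som_tour(collection)                          # final AUV visiting order
```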
42.
Both 5G-NR and B5G systems must support the efficient transmission of small data packets (blocks) and switch quickly and flexibly between different radio transmission techniques (e.g., wideband or ultra-low latency). This paper systematically reviews and compares the radio transmission techniques related to small data packets (blocks), such as two-step RACH, pre-configured grants, and contention over common resources, analyzes their impact on the various 3GPP systems and RAN protocols as well as their relative complexity, and further looks ahead to their future development and application trends.
43.
44.
It is important to perform neutron transport simulations with accurate nuclear data in the neutronics design of a fusion reactor. However, the absolute values of large-angle scattering cross sections vary among nuclear data libraries, even for a well-examined nuclide such as iron. Benchmark experiments focusing on large-angle scattering cross sections were thus performed to confirm the correctness of the nuclear data libraries. The series of benchmark experiments was performed at a DT neutron source facility, OKTAVIAN of Osaka University, Japan, using the unique experimental system established by the authors' group, which can extract only the contribution of large-angle scattering reactions. This system consists of two shadow bars, a target plate (iron), and a neutron detector (niobium). Two types of shadow bars were used and four irradiations were conducted for one experiment, so that the contribution of room-return neutrons was effectively removed and only large-angle-scattered neutrons were extracted from the four measured Nb reaction rates. The experimental results were compared with calculations for five nuclear data libraries: JENDL-4.0, JEFF-3.3, FENDL-3.1, ENDF/B-VII, and the recently released ENDF/B-VIII. The comparison showed that ENDF/B-VIII gave the best result, whereas ENDF/B-VII overestimated and the other libraries largely underestimated the measurements at 14 MeV.
45.
Sorting-based reversible data hiding (RDH) methods such as pixel-value-ordering (PVO) can predict pixel values accurately and achieve extremely low distortion in the embedded image. However, the excellent performance of these methods was not well explained in previous works, and there are unexploited common points among them. In this paper, we propose a general multi-predictor (GMP) framework to summarize PVO-based RDH methods and explain their high prediction accuracy. Moreover, by utilizing the proposed GMP framework, a more efficient sorting-based RDH method is given as an example to show the generality and applicability of our framework. Compared with other PVO-based methods, the proposed example method achieves a significant improvement in embedding performance. It is hoped that more efficient sorting-based RDH algorithms can be designed according to our framework by devising better predictors and better ways of combining them.
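To make the sorting-based prediction concrete, here is a minimal sketch of the classic PVO step for the largest pixel of a block (my own illustration of the underlying idea, not the proposed GMP framework): the second-largest value predicts the maximum, a prediction error of 1 carries one message bit, and larger errors are shifted so the mapping stays invertible; handling of the block minimum, overflow, and the location map is omitted.

```python
import numpy as np

def pvo_embed_max(block, bit):
    """Classic PVO-style embedding on the largest pixel of one block (sketch).
    Returns the modified block and the prediction error that was used."""
    flat = block.astype(int).ravel()
    order = np.argsort(flat, kind="stable")     # sorting-based prediction
    i_max, i_2nd = order[-1], order[-2]
    e = flat[i_max] - flat[i_2nd]               # predict the max by the 2nd-largest value
    if e == 1:                                  # expandable: carries one data bit
        flat[i_max] += bit
    elif e > 1:                                 # shifted: keeps the mapping invertible
        flat[i_max] += 1
    # e == 0: the two largest pixels are equal and the block is left unchanged
    return flat.reshape(block.shape), e

# Example: a 2x2 block whose largest pixel exceeds the second largest by exactly 1.
block = np.array([[161, 162], [158, 160]], dtype=np.uint8)
marked, err = pvo_embed_max(block, bit=1)       # 162 -> 163, err == 1
```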
46.
Production and world consumption of spices are constantly increasing. Although the antimicrobial properties of some spices are well documented, their use in the agri-food industry is also responsible for microbial contamination and spoilage. Bacterial spores introduced by spices can withstand different preparation processes, particularly thermal treatments, leading to food alterations during storage. This review brings together data from the literature on the prevalence and concentrations of spore-forming bacteria in all commercially available spices. The sporeformers found in spices belong mainly to the genera Bacillus and Clostridium. Such contamination is very common and sometimes reaches high levels, as in pepper and turmeric. Bacillus licheniformis and Bacillus cereus are the most frequently detected species. Studying the harvesting, processing, and storage procedures for spices helps explain why such high prevalence and concentrations are observed. Spices are mostly produced in developing countries on small farms using traditional production methods. Spices become contaminated by bacterial spores in two main ways: by contact with soil during harvesting or drying, as for pepper, or by cross-contamination during the water-cooking step, as for turmeric. From these observations, we propose some recommendations. Different methods that can be used to eliminate bacterial spores from spices are presented, along with their efficiency and the limitations of their use.
47.
The demand for food production has been constantly increasing due to the rising population. In developed countries, for example, regional production of old, rarely utilized grains alongside commonly consumed grains has gained importance in recent years. These grains, known collectively as ancient or heirloom grains, have offered both farmers and consumers novel cultivation options and products with interesting taste, characteristics, and nutritional value. Among the 30 000 known plant species, only five cereals currently provide more than 50% of the world's energy intake: bread wheat (Triticum aestivum), rice (Oryza sativa), sorghum (Sorghum bicolor), millets (Panicum sp.) and maize (Zea mays). The excessive utilization of these selected species has great potential to cause genetic losses and difficulty in meeting future agricultural demands. Teff (Eragrostis tef), an ancient grain extensively cultivated in countries such as Eritrea and Ethiopia, provides a promising alternative for new food uses, since its nutritional value is significantly higher than that of most other cereal grains. The absence of gluten allows flexibility in food utilization, since teff can be directly substituted for gluten-containing products. The grain also offers an excellent balance of essential amino acids and minerals, which can fulfil the recommended daily intake and eliminate the need for fortification and enrichment. This review provides a general overview of the physical properties and nutritional composition of teff grains in relation to processing and applications in the food and feed industries. The current status of teff utilization, the challenges in production and commercialization, and future opportunities are presented and discussed.
48.
Any knowledge extraction relies (possibly implicitly) on a hypothesis about the dependence among the modelled data. The extracted knowledge ultimately serves decision-making (DM). DM always faces uncertainty, which makes probabilistic modelling appropriate. The black-box modelling inspected here deals with "universal" approximators of the relevant probabilistic model. Finite mixtures with components in the exponential family are often exploited. Their attractiveness stems from their flexibility, the cluster interpretability of their components, and the existence of algorithms for processing high-dimensional data streams. They are even used in dynamic cases with mutually dependent data records, with regression and autoregression mixture components serving to model the dependence. These dynamic models, however, mostly assume data-independent component weights, that is, memoryless transitions between dynamic mixture components. Such mixtures are not universal approximators of dynamic probabilistic models. Formally, this follows from the fact that the set of finite probabilistic mixtures is not closed with respect to conditioning, which is the key estimation and prediction operation. The paper overcomes this drawback by using ratios of finite mixtures as universally approximating dynamic parametric models. The paper motivates them, elaborates their approximate Bayesian recursive estimation, and reveals their application potential.
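A minimal worked form of the closure argument, in generic notation that only illustrates the abstract's claim: conditioning a finite mixture produces a mixture whose weights depend on the conditioning data, so the family with fixed weights is not closed under conditioning, which is exactly what motivates taking ratios of finite mixtures as the parametric model.

```latex
% Joint model of a data record (y_t, x_t) as a finite mixture (generic notation, not the paper's):
f(y_t, x_t \mid \Theta) = \sum_{c=1}^{C} w_c \, f_c(y_t \mid x_t, \theta_c)\, f_c(x_t \mid \theta_c)

% Conditioning, the key predictive operation, gives a ratio of two finite mixtures:
f(y_t \mid x_t, \Theta)
  = \frac{\sum_{c=1}^{C} w_c \, f_c(y_t \mid x_t, \theta_c)\, f_c(x_t \mid \theta_c)}
         {\sum_{c=1}^{C} w_c \, f_c(x_t \mid \theta_c)}
  = \sum_{c=1}^{C}
      \underbrace{\frac{w_c \, f_c(x_t \mid \theta_c)}
                       {\sum_{j} w_j \, f_j(x_t \mid \theta_j)}}_{\text{data-dependent weights}}
      \, f_c(y_t \mid x_t, \theta_c)
```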
49.
Using dimer acid (DA) as the raw material, DA diglycidyl ester (DADGE) was synthesized and used as a reactive toughening agent to prepare paper-based copper clad laminate (p-CCL). The factors affecting the epoxy value of DADGE and the effect of the resin on the gelation time were studied. The effects of the epoxy value and the addition amount of DADGE on the solder-leaching resistance, flammability, water absorption, bending strength, and impact strength of the p-CCL were discussed. The results showed that when the molar ratio of DA to ECH was 1:8 and the molar ratio of DA to sodium hydroxide was 1:1.6, the epoxy value of DADGE reached its maximum of 0.23 mol/100 g. DADGE can shorten the gelation time of the glue. The p-CCL meets the performance requirements of the IPC-TM-650 standard, and when the amount of DADGE added is less than 12 wt%, the flammability rating of the p-CCL reaches UL94 V-0. The p-CCL prepared with 6 wt% of DADGE having an epoxy value of 0.08 mol/100 g shows the best overall performance: its toughness and rigidity are comparable to those of p-CCL containing 12 wt% of commercially available high-performance toughening agents, and it has higher solder-leaching resistance. © 2019 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 2019, 136, 47508.
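For readers unfamiliar with the unit, a brief note on the reported epoxy values (a standard definition, not something specific to this paper): an epoxy value quoted in mol/100 g is the number of moles of epoxy groups per 100 g of resin, and it converts directly to an epoxide equivalent weight.

```latex
% Epoxide equivalent weight (EEW) from an epoxy value E given in mol/100 g:
EEW = \frac{100\ \text{g}}{E}
% e.g. E = 0.23\ \text{mol}/100\ \text{g} \Rightarrow EEW \approx 435\ \text{g/eq},
%      E = 0.08\ \text{mol}/100\ \text{g} \Rightarrow EEW = 1250\ \text{g/eq}.
```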
50.
This study proposes a data-driven operational control framework using machine-learning-based predictive modeling, with the aim of decreasing the energy consumption of a natural gas sweetening process. This multi-stage framework is composed of the following steps: (a) a clustering algorithm based on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) methodology is implemented to characterize the sampling space of all possible operating states and to determine the operational modes of the gas sweetening unit, (b) the lowest steam consumption of each operational mode is selected as a reference for operational control of the gas sweetening process, and (c) a number of high-accuracy regression models are developed using the Gradient Boosting Machines algorithm to predict the controlled parameters and output variables. This framework provides an operational control strategy with actionable insights into the energy performance of the unit's current operations and also indicates the energy-saving potential to gas-treating plant operators. The ultimate goal is to leverage this data-driven strategy to identify the achievable energy-conservation opportunities in such plants. The dataset for this study consists of 29 817 records sampled over the course of 3 years from a gas train in the South Pars Gas Complex. Furthermore, our offline analysis demonstrates a potential energy saving of 8%, equivalent to a reduction of 5 760 000 Nm³ in natural gas consumption, which can be achieved by mapping the steam consumption states of the unit to the best energy performances predicted by the proposed framework.
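A compact sketch of the three steps as described in the abstract; the column names and every hyperparameter below are illustrative assumptions of mine, not values from the paper. DBSCAN groups the historical operating states into modes, the per-mode minimum steam consumption becomes the operational reference, and a gradient-boosting regressor predicts steam consumption from the state variables.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical tag names for the sweetening unit; the abstract does not list its variables.
STATE_COLS = ["feed_gas_flow", "feed_h2s", "feed_co2", "amine_circulation", "ambient_temp"]
TARGET_COL = "steam_consumption"

def label_modes(df, eps=0.5, min_samples=50):
    """Step (a): cluster the historical operating states into modes with DBSCAN."""
    X = StandardScaler().fit_transform(df[STATE_COLS])
    modes = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return df.assign(mode=modes).query("mode != -1")      # drop DBSCAN noise points

def add_mode_baseline(df):
    """Step (b): the lowest steam consumption observed in each mode is the reference."""
    best = df.groupby("mode")[TARGET_COL].min().rename("best_steam")
    return df.join(best, on="mode")

def fit_steam_model(df):
    """Step (c): gradient-boosting regression of steam consumption from the state variables."""
    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
    model.fit(df[STATE_COLS], df[TARGET_COL])
    return model

# Saving estimate as the gap between predicted consumption and the per-mode best:
# df = add_mode_baseline(label_modes(pd.read_csv("gas_train_history.csv")))
# model = fit_steam_model(df)
# saving = (model.predict(df[STATE_COLS]) - df["best_steam"]).clip(lower=0).mean()
```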